A non-singular black hole model as a possible end-product of gravitational collapse
In this paper we present a non-singular black hole model as a possible
end-product of gravitational collapse. The depicted spacetime, of Petrov type
[II,(II)], is an exact solution of the Einstein equations and contains two
horizons. The equation of state in the radial direction is a well-behaved
function of the density: it smoothly reproduces vacuum-like behavior near r=0
while tending to a polytrope at larger r (i.e., lower densities). The final
equilibrium configuration comprises a de
Sitter-like inner core surrounded by a family of 2-surfaces of matter fields
with variable equation of state. The fields are all concentrated in the
vicinity of the radial center r=0. The solution depicts a spacetime that is
asymptotically Schwarzschild at large r, while it becomes de Sitter-like for
vanishing r. Possible physical interpretations of the macro-state of the black
hole interior in the model are offered. We find that the possible state admits
two equally viable interpretations, namely either a quintessential intermediary
region or a phase transition in which a two-fluid system is in both dynamic and
thermodynamic equilibrium. We estimate the ratio of pure matter present to the
total energy and find it to be virtually the same, 0.83, under both
interpretations. Finally, the well-behaved dependence of the density and
pressure on the radial coordinate provides some insight into the information
loss paradox.
Comment: 12 pages, 1 figure. Accepted for publication in Phys. Rev.
The K-Server Dual and Loose Competitiveness for Paging
This paper has two results. The first is based on the surprising observation
that the well-known ``least-recently-used'' paging algorithm and the
``balance'' algorithm for weighted caching are linear-programming primal-dual
algorithms. This observation leads to a strategy (called ``Greedy-Dual'') that
generalizes them both and has an optimal performance guarantee for weighted
caching.
For the second result, the paper presents empirical studies of paging
algorithms, documenting that in practice, on ``typical'' cache sizes and
sequences, the performance of paging strategies is much better than their
worst-case analyses in the standard model suggest. The paper then presents
theoretical results that support and explain this. For example: on any input
sequence, with almost all cache sizes, either the performance guarantee of
least-recently-used is O(log k) or the fault rate (in an absolute sense) is
insignificant.
Both of these results are strengthened and generalized in ``On-line File
Caching'' (1998).
Comment: conference version: "On-Line Caching as Cache Size Varies", SODA
(1991)
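The paging model underlying these results is simple to simulate. The sketch below (an illustration, not the paper's code; the function name and trace are invented for the example) counts LRU page faults for a given cache size k, which is exactly the quantity the fault-rate results above are about:

```python
from collections import OrderedDict

def lru_faults(requests, k):
    """Simulate least-recently-used paging with cache size k; count faults."""
    cache = OrderedDict()   # recency order: least recently used first
    faults = 0
    for p in requests:
        if p in cache:
            cache.move_to_end(p)           # p is now most recently used
        else:
            faults += 1
            if len(cache) >= k:
                cache.popitem(last=False)  # evict the least recently used page
            cache[p] = None
    return faults
```

Sweeping k over a single request trace shows the phenomenon the paper formalizes: for almost all cache sizes the fault rate is either small in an absolute sense or close to optimal, even though worst-case traces (e.g., cycling over k+1 pages) force a fault on every request.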
An approximate binary-black-hole metric
An approximate solution to Einstein's equations representing two
widely-separated non-rotating black holes in a circular orbit is constructed by
matching a post-Newtonian metric to two perturbed Schwarzschild metrics. The
spacetime metric is presented in a single coordinate system valid up to the
apparent horizons of the black holes. This metric could be useful in numerical
simulations of binary black holes. Initial data extracted from this metric have
the advantages of being linked to the early inspiral phase of the binary
system, and of not containing spurious gravitational waves.
Comment: 20 pages, 1 figure; some changes in Sec. IV B,C and Sec.
Factoring by electronic mail
This paper describes a distributed implementation of two factoring algorithms, the elliptic curve method (ECM) and the multiple polynomial quadratic sieve algorithm (MPQS). The authors' ECM implementation on a network of DEC MicroVAX processors has factored several numbers from the Cunningham project. The authors have also implemented the multiple polynomial quadratic sieve algorithm on the same network. On this network alone, they are now able to factor any 100-digit integer, or to find 35-digit factors of numbers up to 150 digits long, within one month. To allow an even wider distribution of their programs, they made use of electronic mail networks for the distribution of the programs and for inter-processor communication. Even during the initial stage of this experiment, machines all over the United States and at various places in Europe and Australia contributed 15 percent of the total factorization effort. At all the sites where the program is running, the authors use only cycles that would otherwise have been idle. This shows that the enormous computational task of factoring 100-digit integers with the current algorithms can be completed almost for free. Since they use a negligible fraction of the idle cycles of all the machines on the worldwide electronic mail networks, the authors could factor 100-digit integers within a few days with a little more help.
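To make the ECM side of this concrete, here is a minimal textbook sketch of ECM "stage 1" (this is generic illustration code, not the authors' implementation; function names and parameter defaults are invented). The idea: pick a random curve and point modulo n, multiply the point by prime powers up to a bound B, and hope a modular inverse fails along the way, because a failed inverse yields a nontrivial gcd with n:

```python
import math
import random


class FoundFactor(Exception):
    """Raised when a modular inverse fails, revealing a factor of n."""
    def __init__(self, g):
        self.g = g


def ec_add(P, Q, a, n):
    """Add points on y^2 = x^3 + a*x + b over Z/nZ (None = point at infinity)."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % n == 0:
        return None
    if P == Q:
        num, den = (3 * x1 * x1 + a) % n, (2 * y1) % n
    else:
        num, den = (y2 - y1) % n, (x2 - x1) % n
    g = math.gcd(den, n)
    if 1 < g < n:
        raise FoundFactor(g)          # inverse does not exist mod n: a factor
    if g == n:
        return None                   # degenerate case; treat as infinity
    lam = num * pow(den, -1, n) % n
    x3 = (lam * lam - x1 - x2) % n
    return (x3, (lam * (x3 - x1) - y1) % n)


def ec_mul(k, P, a, n):
    """Scalar multiplication by double-and-add."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, n)
        P = ec_add(P, P, a, n)
        k >>= 1
    return R


def ecm(n, B=30, curves=200):
    """ECM stage 1: random curves, multiply by all prime powers up to B."""
    primes = [p for p in range(2, B + 1)
              if all(p % q for q in range(2, int(p ** 0.5) + 1))]
    for _ in range(curves):
        a, x, y = (random.randrange(n) for _ in range(3))
        P = (x, y)                    # b is implicitly fixed by (x, y) and a
        try:
            for p in primes:
                k = p
                while k * p <= B:
                    k *= p
                P = ec_mul(k, P, a, n)
                if P is None:
                    break             # reached infinity mod all factors; next curve
        except FoundFactor as e:
            return e.g
    return None
```

Each curve is an independent trial, which is what made the method so well suited to the authors' loosely coupled, email-connected network: every idle machine can grind through its own curves with essentially no communication.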
Absorption of mass and angular momentum by a black hole: Time-domain formalisms for gravitational perturbations, and the small-hole/slow-motion approximation
The first objective of this work is to obtain practical prescriptions to
calculate the absorption of mass and angular momentum by a black hole when
external processes produce gravitational radiation. These prescriptions are
formulated in the time domain within the framework of black-hole perturbation
theory. Two such prescriptions are presented. The first is based on the
Teukolsky equation and it applies to general (rotating) black holes. The second
is based on the Regge-Wheeler and Zerilli equations and it applies to
nonrotating black holes. The second objective of this work is to apply the
time-domain absorption formalisms to situations in which the black hole is
either small or slowly moving. In the context of this small-hole/slow-motion
approximation, the equations of black-hole perturbation theory can be solved
analytically, and explicit expressions can be obtained for the absorption of
mass and angular momentum. The changes in the black-hole parameters can then be
understood in terms of an interaction between the tidal gravitational fields
supplied by the external universe and the hole's tidally-induced mass and
current quadrupole moments. For a nonrotating black hole the quadrupole moments
are proportional to the rate of change of the tidal fields on the hole's world
line. For a rotating black hole they are proportional to the tidal fields
themselves.
Comment: 36 pages, revtex4, no figures, final published version
On the factorization of RSA-120
We present data concerning the factorization of the 120-digit number RSA-120, which we factored on July 9, 1993, using the quadratic sieve method. The factorization took approximately 825 MIPS years and was completed within three months real time. At the time of writing, RSA-120 is the largest integer ever factored by a general-purpose factoring algorithm. We also present some conservative extrapolations to estimate the difficulty of factoring even larger numbers, using either the quadratic sieve method or the number field sieve, and discuss the issue of the crossover point between these two methods.
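The kind of extrapolation mentioned here is usually based on the heuristic L-notation running time of the quadratic sieve, exp((1+o(1)) sqrt(ln n · ln ln n)). The sketch below (illustrative only; it drops the (1+o(1)) factor and is not the paper's calibrated estimate) compares the relative effort for different input sizes:

```python
import math

def qs_effort(digits):
    """Heuristic quadratic-sieve cost exp(sqrt(ln n * ln ln n)) for an
    n with the given number of decimal digits, ignoring the (1+o(1))."""
    ln_n = digits * math.log(10)
    return math.exp(math.sqrt(ln_n * math.log(ln_n)))

# relative effort: a 130-digit vs. a 120-digit factorization
ratio = qs_effort(130) / qs_effort(120)
```

Under this crude model, ten extra digits multiply the work by a single-digit factor, which is why the crossover to the number field sieve (whose heuristic cost grows like exp(c (ln n)^(1/3) (ln ln n)^(2/3)), i.e., more slowly) matters for numbers much beyond RSA-120.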
On the physical meaning of Fermi coordinates
(Some Latex problems should be removed in this version) Fermi coordinates
(FC) are supposed to be the natural extension of Cartesian coordinates for an
arbitrarily moving observer in curved space-time. Since their construction
cannot be carried out on the whole space, or even on the whole past of the
observer, we examine which construction principles are responsible for this
effect and how
they may be modified. One proposal for a modification is made and applied to
the observer with constant acceleration in the two and four dimensional
Minkowski space. The two dimensional case has some surprising similarities to
Kruskal space which generalize those found by Rindler for the outer region of
Kruskal space and the Rindler wedge. In perturbational approaches the
modification also leads to different predictions for certain physical systems.
As an example we consider atomic interferometry and derive the deviation of the
acceleration-induced phase shift from the standard result in Fermi coordinates.
Comment: 11 pages, KONS-RGKU-94/02 (Latex)
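For context, the "standard result" referred to here is, assuming the usual Mach-Zehnder atom-interferometer configuration (a $\pi/2$–$\pi$–$\pi/2$ pulse sequence with pulse separation $T$ and effective two-photon wave vector $k_{\mathrm{eff}}$), the acceleration-induced phase shift

\[
\Delta\phi = k_{\mathrm{eff}}\, a\, T^2 ,
\]

where $a$ is the (constant) acceleration along $k_{\mathrm{eff}}$; the abstract's modified Fermi coordinates predict deviations from this expression.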
Computerized neurocognitive training for improving dietary health and facilitating weight loss
Nearly 70% of Americans are overweight, in large part because of overconsumption of high-calorie foods such as sweets. Reducing sweets is difficult because powerful drives toward reward overwhelm inhibitory control (i.e., the ability to withhold a prepotent response) capacities. Computerized inhibitory control trainings (ICTs) have shown positive outcomes, but their impact on real-world health behavior has been variable, potentially because of limitations inherent in existing paradigms, e.g., being low in frequency, intrinsic enjoyment, personalization, and ability to adapt to increasing ability. The present study aimed to assess the feasibility, acceptability, and efficacy of a gamified and non-gamified, daily, personalized, and adaptive ICT designed to facilitate weight loss by targeting consumption of sweets. Participants (N = 106) were randomized to one of four conditions in a 2 (gamified vs. non-gamified) by 2 (ICT vs. sham) factorial design. Participants were prescribed a no-added-sugar diet and completed 42 daily, at-home trainings, followed by two weekly booster trainings. Results indicated that the ICTs were feasible and acceptable. Surprisingly, compliance with the 44 trainings was excellent (88.8%) and equivalent across the gamified and non-gamified conditions. As hypothesized, the impact of ICT on weight loss was moderated by implicit preference for sweet foods [F(1,95) = 6.17, p = .02], such that only those with higher-than-average implicit preference benefited (8-week weight losses for ICT were 3.1% vs. 2.2% for sham). A marginally significant effect was observed for gamification to reduce the impact of ICT. Implications of the findings for the continued development of ICTs to impact health behavior are discussed.